Discussions Redesign & AI Discussions

Video Transcript
Thank you. Hi, I'm Sam Garlica. I'm the product manager over our collaborations team. My areas are things like inbox, discussions, announcements, and groups, various people-to-people things. Anything where someone can talk to someone else, that's usually my department. And lately, we've been focusing on the discussions redesign.

This is something we started in 2021, so it's been a while. And then mid last year, we actually kind of finished it. By that I mean we mainly, you know, finished developing the last feature, the last piece of functionality. But really, our goal for the redesign went beyond just rebuilding what we had.

Legacy discussions is a little dated. It looks a little old, it's not really accessible anymore, and it wasn't responsive. So a lot of the work we did, a lot of the changes we made, were because of responsiveness. We wanted to really ensure that no matter what screen size users were on, it was going to work and they were going to have a good experience.

We also did a lot to ensure accessibility compliance: color contrast, buttons, ensuring a user who's on a screen reader can go through the whole thing, understand it, and not get lost. I do want to point out: this is the discussions and announcements redesign, but it's only affecting course announcements, not global announcements. Really it's just the UI changing for course announcements.

It's going to match the discussion creation UI and the interactions you'd have there; there are no functionality changes to announcements. I just want to point it out, because if you're turning on the feature preview early, it will also say announcements, and I know that's tripping a couple of people up. But getting more into our UI and UX updates: we did a lot, and it does look very different from how legacy looked. We did remove some excessive lines to clean up the interface and make it look a bit more like other tools our users are using.

We also moved a lot of the buttons and functionality that change the viewing experience of the entire discussion thread to the top of the page. This is where we put the sorting options, newest to oldest or oldest to newest. We also added some filtering options, so you can filter down to just unread posts or read posts. A bit more functionality. We did also update our search.

This search is specific to the discussion thread; the one on the discussion index page is the one thing that hasn't changed. The search within a discussion thread will search the entire post for any content. It'll also search the names of the users who posted. And then it'll highlight any information that matches.
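A minimal sketch of that behavior, with illustrative types and function names (these are assumptions for illustration, not the actual Canvas implementation):

```ts
// Illustrative sketch only; not the Canvas code.
interface DiscussionPost {
  authorName: string;
  body: string;
  read: boolean;
}

// Keep posts that pass the read/unread filter and match the search term
// in either the post body or the author's name.
function searchThread(
  posts: DiscussionPost[],
  term: string,
  filter: "all" | "read" | "unread" = "all"
): DiscussionPost[] {
  const needle = term.toLowerCase();
  return posts
    .filter((p) => filter === "all" || (filter === "read") === p.read)
    .filter(
      (p) =>
        p.body.toLowerCase().includes(needle) ||
        p.authorName.toLowerCase().includes(needle)
    );
}

// Wrap each match in <mark> so the UI can highlight it.
function highlight(text: string, term: string): string {
  if (!term) return text;
  const escaped = term.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  return text.replace(new RegExp(`(${escaped})`, "gi"), "<mark>$1</mark>");
}
```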

And then this image to the side is just showing the responsiveness; I have it sized down to a mobile view on my desktop. We really just wanted to ensure we're covering all use cases. We know instructors are grading on the go. They're grading on tablets.

You know, sometimes you just have five or ten minutes here and there, and we want to make sure we can accommodate that and make it easier. So part of this is also making sure the mobile app experience is similar. While my team doesn't oversee the mobile apps, we have been working very closely with the mobile team to get in as much functionality as is possible and realistic. There is something I'll cover later on, inline versus split view, that really doesn't make sense in a mobile app context.

But we wanted to ensure that if users are switching between the two, they're getting the same experience. You don't want there to be missing functionality, or to have it look so completely different that you're searching for a button. I know I do that all the time in other tools, and it's just not fun; it adds time, and we want to reduce that mental load. So, what's new in the redesign? I know I've talked a lot about what we did to change it and update it, and how we're trying to become compliant. But along the way, we got to add some fun new things.

One of them was this flexible viewing experience. And this was kind of the first hurdle of the redesign that we encountered in the feature user group. This was a year and a half ago, but it came up: we had a lot of feedback about how we were intending to release the redesign, which was that we would only be having split view. Instructors and students would no longer be able to use the traditional inline nested view they were used to. It didn't go over very well.

I can assure you. Luckily, that was before me. What we did was listen. We stopped what we were doing, we gathered all of this feedback, and we decided that while there is value in what we built, there's also value in what we had.

And so we changed it so users can switch between the two experiences. This is done with just a button, don't fret; you can select it at any time and go back and forth. It will remember your choice, though, if you leave and come back. So inline is very traditional.

Everything's on one page. You can either expand all or collapse all comments, and things are nested. Split view will open a tray on the right side of the page, with the original post at the top and any threaded replies beneath it. We've had a lot of great feedback on this now.

We're allowing users to kind of choose their own adventure. Part of it is that split view works better for some users, generally speaking, because it is a little more focused in that side tray. But inline can also work better for some grading experiences. It really is just having that choice, being able to choose depending on the situation and depending on your workflow. Something else we added, and this is the thing we received the most feedback on, all of it extremely positive, is @ mentions.

So, very similar to the Slack functionality, it's based on the course roster and who the students have access to. You wouldn't want them mentioning someone who isn't in the course at all. But students are able to @ mention, you know, whoever they're replying to, or maybe their instructor if they have comments or need clarification on the discussion post. This sends a notification, and the notification links directly to the post where they were mentioned.
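A hypothetical sketch of those mechanics, roster-scoped suggestions plus a deep-linked notification; the names here are assumptions, not the Canvas API:

```ts
// Hypothetical sketch; names are assumptions, not the Canvas API.
interface RosterEntry {
  id: string;
  displayName: string;
}

// Suggestions come only from the roster the student can see, so nobody
// outside the course can ever be mentioned.
function mentionSuggestions(roster: RosterEntry[], typed: string): RosterEntry[] {
  const prefix = typed.toLowerCase();
  return roster.filter((u) => u.displayName.toLowerCase().startsWith(prefix));
}

// A mention notifies the mentioned user with a link straight to the reply.
function notifyMention(mentioned: RosterEntry, replyId: string): void {
  sendNotification(mentioned.id, `/discussions/replies/${replyId}`);
}

declare function sendNotification(userId: string, link: string): void;
```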

We found, especially at the University of Minnesota, who has already adopted this in part, that this has really increased engagement. It's a lot more real time and just allows for a more natural flow of conversation. Again, I'll touch on this again: we did improve the search.

It's actually the wrong image, but we improved the search. It does allow for text highlighting. We are working to add images back into it; that was something we missed, but one of our amazing community members pointed it out and we are working to add that back in.

So it's not just how it worked in legacy. A small thing we added was role pills for faculty recognition. As instructors are going through an entire thread or adding comments, or maybe a TA is helping grade or facilitating conversation, it's very clear to students that their faculty is participating and is aware of what they're doing. It also helps if, say, a course has two Nates in it: which one is this? Is this teacher Nate or student Nate? Who am I talking to? Reply quoting is something we also added.

This we've really seen used for those posts that are responding to someone else but aren't threaded under them, where the post being referenced could be three or four posts, a page or two, away. It provides greater context and just keeps you from scrolling up and down; we're trying to save some time here. A big thing we added, which can be set up on the create/edit page, is anonymous and partially anonymous discussions.

So, of course, this isn't for every use case, but anonymous is fully anonymous: once it's created, you cannot go back. We really want to preserve anonymity here, and in some discussions you just need to be able to speak a bit more freely; or if you're trying to get feedback from students, that'd be a fantastic use case.

And partially anonymous gives students the choice: they can choose whether they want to be named or anonymous. Because of the anonymity, some functionality is disabled, like @ mentions. The last thing we needed was someone mentioning a student and breaking their anonymity for them. Something else we added.

There are quite a few things. Discussion post reporting: this is mainly for instructors, but it allows a student to flag a post as inappropriate, offensive, or other, kind of broad categories there.

What it does is, when a reply is reported, it sends a notification to the instructor. The instructor is then able to review the post and decide: did this warrant flagging? Was it an error? And they can delete the post if needed. Moderating is a big thing, especially in large courses, and this kind of empowers students to do a bit more of their own moderating. The notification, as you see, does link directly back to the post that was reported.

And the little, it's kind of hard to see, but the little red indicator on the post will only show to the instructor. It doesn't show to students, and the student whose reply was reported doesn't get a notification either. That's for the instructor to moderate and to make a decision, to have ownership of their course.
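A minimal sketch of that reporting flow, with illustrative names (this is not the Canvas implementation): instructors get a deep-linked notification, the reported student does not, and the indicator renders only for instructor roles.

```ts
// Illustrative sketch; names are assumptions, not the Canvas code.
type ReportReason = "inappropriate" | "offensive" | "other";

interface ReplyReport {
  replyId: string;
  reason: ReportReason;
}

function onReplyReported(report: ReplyReport, instructorIds: string[]): void {
  // Only instructors are notified; the reported student is not told.
  for (const instructorId of instructorIds) {
    sendNotification(instructorId, {
      message: `A reply was reported as ${report.reason}.`,
      link: `/discussions/replies/${report.replyId}`, // deep link to the reply
    });
  }
  // The red "reported" indicator renders only for instructor roles.
  markReplyAsReported(report.replyId, { visibleToRoles: ["teacher", "ta"] });
}

declare function sendNotification(
  userId: string,
  payload: { message: string; link: string }
): void;
declare function markReplyAsReported(
  replyId: string,
  opts: { visibleToRoles: string[] }
): void;
```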

Edit history is another one we added. As an online campus student for seven years myself, I'm very familiar with the workarounds. Some students will, you know, post a blank post to get around the "post before seeing other responses" requirement. This is giving insight into who is trying to do that workaround.

It also has the great benefit of allowing you to see who is actually improving their posts. If you've got a repeat student who is putting minimal effort into their posts, this way you can see: after you discussed it with them, did they actually go back and make changes? Did they go back and add new context or understanding after another lecture? You can see those changes over time. And in this feature, under the post you see the time it was edited, and when you open the view-history modal, it'll have the different versions.

In this example they're very different: one of them is just kind of lorem ipsum, and one of them's an actual post. So, a lot of things, a lot of content. But when we finished that last feature last year, I went into the community and posted about it; I've replied to a few of you, and hello in person. I am in there quite a lot.

But we started looking at: when can we enforce this? I know that's no one's favorite thing; it's certainly not mine. But with our feature previews, at some point we do need to take that next step. That not only allows our team to deprecate old code and focus on new, but it ensures we're getting the new functionality, the more accessible version, to our users, to all of them. That being said, we decided July 20

was the date we were going to enforce it in production. I initially posted about it in October, kind of announcing that nine-month transition timeline. And because it's a feature preview, anyone can go and test it now. They can turn it on in a subaccount; they can turn it on in a single course.

It is backwards compatible, and there's no migration. You don't need to do anything; it's going to be a click of a button, and then you'll have a nice, bright, refreshed UI. And we know that; we've tried very hard to test and prepare.

And I would say overall about forty percent of our traffic is already using the redesign. So it's not only being tested internally but heavily tested by our users, and our users are leaving feedback. Some of the feedback we've received is that we missed a thing or two. One of them is hyperlinking users to their own files.

So we are looking at adding that back in. Another thing: for accessibility reasons, we removed the little blue dot that was next to unread discussion posts; it was actually a toggle allowing instructors to mark a post as read or unread. We didn't realize that was such a heavily used workflow. So what we're doing: I actually tasked the designer on my team, I need a button. Create me a new button that's accessible.

So we can add that back in, because currently it's a two-click process, and that's just too many clicks. You're in a grading workflow; the last thing you need is to constantly be scrolling across the page, opening a menu, and going down it. So we are in the middle of creating a new button that will be exactly where that little indicator was and look very similar.

But by its nature of being a button now instead of a toggle, it will be accessible to all users. There won't just be a weird element on a page that has no context and that a screen reader doesn't read as anything. So, as we go on: discussions, you know, this can be tried by anyone at any time. You can go back if you're not liking it.

I highly encourage you: please post feedback. I am in the community every day reading everyone's posts. I can't respond to everyone; there are a lot of you and one of me, but I am reading everything, and we are considering every piece of feedback we're getting.

We may not be able to act on all of it, for one reason or another. Change is hard. I know some of the changes we made were for good reasons, and that doesn't make it any easier. So that's a lot about the redesign; before we go into Q&A, I want to touch on what's next. Because I know you've heard a lot about the redesign over the last couple of years.

We were radio silent for a little while, and I apologize for that. A UI/UX update with, you know, a couple of new pieces of functionality isn't groundbreaking; it isn't innovative. So what are we doing next? Next is checkpoints. We have strayed away from calling this "multiple due dates" because we already have Assign To functionality that is often referred to in the same manner.

We want to make sure we're not confusing our users between the two features. But checkpoints will allow instructors to set an initial due date for an initial post and a response due date. Both of these will show on the student side and the instructor side as two separate due dates under one assignment. So you're not cluttering your gradebook, you're not duplicating the assignments you're creating; you're creating one thing that has additional facets to it. On this screen, this is just the overall mockup, and to preface:

this is still in active development. We haven't really worked on the front end yet, but we have been working heavily on the back end to ensure that how assignments currently work will accommodate this. So there are some things that might change between now and when we actually get this into production. But from here, you can see that we have a due date panel on the side: the initial topic due date that was completed,

and then the response due date that hasn't passed yet. From here, this is more of the teacher view. When you create a graded discussion, you'll have assignment settings. This will just be an option you choose like anything else. And when it is chosen, you'll have the options to select, like anything else, who you're assigning it to,

the initial due date, and the response due date. You'll also be able to specify how many replies you want to require. You know, in every discussion I've ever participated in as a student, this is always something the teacher or instructor has had to manually state and manually enforce. This way, it's just provided in the normal workflow. Students see this on their calendar and on their to-do list, with all the relevant information in it. You'll also be able to split the number of points between the initial response and the replies.

Maybe if you're requiring four replies and only one initial post, you want the points weighted differently. Or if one kid's just really good at those initial posts but not so good at the replies, you can also reflect that. In SpeedGrader and in the traditional gradebook, you'll be able to input different points per checkpoint, and it will automatically total. We modeled this a lot off of how grading works for rubrics.

You know, you have the choice to override. For the student, it will show, again, one assignment with a few things nested under it with the relevant information. And then on the student grades page: part of why we made the decision to go with split points for each checkpoint is that they can have greater clarity on, you know, where did I get my points? Where did I lose my points? And then you can put comments on each area, giving really good feedback to the student, and also see which one they were late on.
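For illustration, here is a small sketch of how a checkpointed discussion's score could total up, under my own assumption (not the shipped design) that each checkpoint carries its own due date, points possible, and score:

```ts
// Illustrative sketch of a checkpointed discussion; field names are assumptions.
interface Checkpoint {
  label: "initial post" | "replies";
  dueAt: Date;
  pointsPossible: number;
  pointsEarned?: number; // graded per checkpoint, e.g. in SpeedGrader
  requiredReplies?: number; // only meaningful for the replies checkpoint
}

// Both checkpoints live under one assignment, so the gradebook shows a
// single row whose score is simply the sum of the checkpoint scores.
function totalScore(checkpoints: Checkpoint[]): number {
  return checkpoints.reduce((sum, cp) => sum + (cp.pointsEarned ?? 0), 0);
}

// Example: initial post weighted lower than the four required replies.
const discussion: Checkpoint[] = [
  { label: "initial post", dueAt: new Date("2024-06-03"), pointsPossible: 4, pointsEarned: 4 },
  { label: "replies", dueAt: new Date("2024-06-07"), pointsPossible: 6, pointsEarned: 4.5, requiredReplies: 4 },
];
console.log(totalScore(discussion)); // 8.5 out of a possible 10
```

Since both checkpoints hang off one assignment, the gradebook still shows a single row whose score is the automatic total, with the manual override option noted above.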

Some other things that are in progress: we are finally adding some basic email functionality to our inbox. So, out-of-office messages; there's a very difficult workaround for that right now that just doesn't work, but we are adding that in, and we're also adding in a signature box. Gone are going to be the days of manually adding your signature every time you need to reply to an email. Some basic things, but really needed functionality in our inbox.

Adding this area, inbox settings, also allows us to add more things in the future. So now on to the more fun stuff: what's in discovery? While my team has been focusing their time and attention on bug fixes for the discussion redesign, getting those last-minute improvements in and working on checkpoints, I've started shifting my attention to what we're doing next. One of those things, the number one, is Turnitin. People need Turnitin in discussions.

This isn't a small endeavor for us. It means we are adding LTI support to discussions. It'll still be a multi-team effort to get that in there, between some internal teams and Turnitin. It also means we'll likely have to add a new area or new view to discussions, to allow instructors to easily see those similarity reports across students, to see if they're just, you know, at a hundred percent.

This is definitely plagiarism right off the bat, or there's something else going on. We're also spending a lot of time on how we can utilize AI in our discussions. I know it's kind of the hot topic, and I know people will be speaking more on it later, but, you know, automated content moderation. I feel like this is probably less important in higher ed, but I know it's a big ask for our K-12s, who have more impulsive students. But also prompt generation: how can I write a better, more engaging discussion prompt? And then analytics.

And analytics is kind of multifaceted. You know, how are my students doing? Are they grasping the concepts? Do they understand the posts? Are they using keywords in the correct context? Using all that information, we can also start to look at different versions of discussions. Maybe some are all manually graded; maybe some are auto-graded and you're just reviewing before posting grades.

So there are a lot of things we can do that will make large courses easier to facilitate, but also really provide good information. A lot of it comes down to just having that information so you can then make decisions with it. I don't think I made it onto this slide, but I'm also working with a couple of different groups to talk about what we can do with sections and groups. It's a very dated area of Canvas.

And there are a lot of very basic things we could do, like labeling. Could we actually label inactive sections or students? Could we add and remove people on the same page for sections and groups? And then just really making sure our permission structures are working. I hear quite a bit about how sections are only used by admins, by IT, but groups don't have enough functionality, so sometimes people go use sections anyway. So we're looking at that.

We're looking at where we can move the dividing line between the two pieces of functionality, the two features, and going from there. So you will likely be hearing more from me in the future, asking questions and probing to see where the challenges are. I just want to touch on some resources real quick. We have the user group in the Canvas community for the discussions redesign. I'm also a frequent member of the product blog.

You'll see me in there often, and I try to reply to as many people as I can who have questions in both of those areas. We'll also soon have a two-pager and an overview showcase video. I know it's been difficult for a lot of people to grasp that this is not New Quizzes, that we're not asking for a migration; we don't want you to have to spend a lot of time on this. And so we're putting together an overview video to really highlight what is different, what is the same, and what you are going to have to manage. Because even if it's no change at all, it still involves change management.

There's always something different. So I'll kind of open it up to Q&A; I know I covered a lot. Alright, hi, I'm from FIU, Florida International University. We have a professor that uses discussions heavily, like most professors probably do.

And one of his main questions is always about notifications. The way he works his course is that any time anyone posts, he wants to be notified via email. So is this new design going to keep that notification process? And also, in the new design, will you be able to reply as a teacher to the email that gets sent? That's all he wants. Yeah.

The notification structure hasn't really changed between legacy and the redesign. Wanting a notification every time someone replies, no matter when, is something I've heard before, so it is on my list for discovery. It likely won't be in this redesign, but it's something we're looking into for the future. Because I know you can set the times the notifications are sent. Yeah.

And I've had a couple of people ask for: even if the discussion is not closed but is past the due date, I still want to be notified if there's participation in it. There was another part to that question, I think. Yeah, being able to, like with the inbox, where you can reply from your email; for discussions, could you do that too at some point? It's something we can look into. It's not part of the redesign, though.

It would depend on where we go in the future, and if we're adding a new area in discussions, does it make more sense to have better messaging capabilities within discussions, or do we need to move that into inbox or email responses as well? So, with the multiple due dates for discussions that you were talking about: how does that interact with rubrics? Would I be able to have, like, one rubric for the initial post and then a different one? The rubric part hasn't been finalized, because another team is actively working on rubrics. We'll kind of figure that out as we both get closer to the end. That is something we want, you know, to give choice where we can. So maybe it doesn't make sense to have two rubrics, one for the initial post and one for responses.

Maybe it just makes sense to have one for the overall assignment, with different sections within it. So that's something we are looking into with the team that owns rubrics. With the plagiarism detection: are you looking at the body of the discussion post and the attachments? So the one we're looking at initially is Turnitin, but we want to make sure we're looking at other tools as well.

So it'll likely be how Turnitin works on assignments already, and I believe they do check attachments as well. So, normally the score would be associated with the file submitted, or with the actual submission, but discussions have both: they have a text area, and they can have an attachment. And I know it's a very common workflow for our professors: post your paper in the discussion forum and comment on two other people's papers.

And so we would want both of those things to be reviewed by Turnitin. Yeah, so we're still very early days. I believe I'm talking to Turnitin in a couple of weeks to get their side and what they're willing to do. Yes. And that's kind of the unique challenge of adding it to discussions: we're not just looking at a single file attachment.

You know, we want to make sure we are reading all of the text the user is writing and checking, you know, maybe how similar it is to other students' posts. Are they maybe rewording a couple of things? So that's something we're really looking into, and we'll be working out the challenges as we go. I could envision, like, we use the plagiarism framework; we have both the framework and LTIs enabled. We give people flexibility, maybe way too much.

But I could envision a solution where there are two checkboxes: plagiarism check on the comment, and plagiarism check on the attachments as well. Absolutely. And with it, we're also looking at where and how we display that information to the instructor. Does it make sense to show it right by the post, or do we need kind of a cohesive view where you're able to see all of one student's posts in one spot? And if that makes sense, then, you know, do they have a theme going on? Alright. So what's your release date for checkpoints, which you mentioned at the beginning? We don't have one yet.

I do believe it will shift to Q2. The back end has been more challenging than we expected, simply because we are doing a lot to how assignments work, which is bad for our team because it's taking us longer, but really good for other teams in Canvas, because it means they'll be able to build on it, for example. So we're trying really hard to make sure the work we're doing is going to benefit other teams as well. Yeah.

We were told that originally, so that makes sense. Yeah. Here we go. Yeah.

I think it's a little spooky that your enforcement date is mid-semester of our summer term. We originally had a launch date where, for us, the better enforcement date would be in August, at the end of the term. So is it a hundred percent July 20? Because we're going to have to move our date, which is fine; we're going to start now. But I mean, we'd have to make our live change at the beginning of summer. We can't have a mid-term change.

Thanks. So far we're sticking with it. There hasn't been feedback around missing features or huge functionality gaps. I do believe we tried to release this once before, and it was just, no, we're not going to do that, because of the inline versus split view issue.

Yep. If something like that does come up before the enforcement, we'll of course move it, but we want to make sure that if we're telling you something, we don't go back on it later and make more work for you, because we got you all prepared and then didn't follow through. If you changed it to, like, August, like end of summer term, I would hope not many people could complain about that. I can't speak for everybody here, but yeah, I could see panic if we didn't make sure we do it early. So, yep. One difference for this versus our normal releases is that it is a feature preview.

Right. So if you need it on two weeks earlier, you can turn it on two weeks earlier. Yes, exactly. That's what I'm saying: we're just going to have to move our launch date up from August. Right.

Because that way you can start it at the summer term. But let the community know if there are any resources we can make that will help with that. Yeah, it's just the change mid-semester. Yeah.

Thanks for the resources you've provided. The mention of peer review kind of triggered this for me. It might not be your area, but you might know. For peer review, I think currently you use the rubric that's already on the assignment. Is there any potential of having a couple of rubrics, one that students would use for peer review, and then the one that the instructor would use for the overall grade?

I don't know. I only sat in on a couple of early discovery calls for rubrics. I do know that, at that time, they were exploring the idea of having multiple rubrics for an assignment.

I don't know where they're at with that; another team owns that. And so, I have two questions. I'll just share them now and you can decide the order. One is pretty straightforward: for the role pills that show the teacher and TA roles, would that work with custom roles we've created based on those? And then for split view, how does that work with a screen reader? Sure, I'll take them in that order.

So, custom roles: unfortunately, the role pills don't support custom roles at this time. It is something we've received feedback on, so it's something we can improve on in the future. Part of it is we needed to stop development at some point to release it and see what we did need to change and what was okay. That is something I've heard before. For the screen reader and the split view:

inline kind of works as you'd expect, hopefully better than it did in legacy. But in split view, we utilized a tray that opens on the right. There were a lot of conversations about this, so let me know if I go too far or too deep. So as a user is navigating, you know, they're tabbing through it, they're opening it.

It moves their focus from the post they're on into the tray. They're then isolated in that tray until they close out of it, and then it puts them back where they left off. So they're able to navigate through the tray and read all the responses, and when they're done, they can go back and still be where they left off. From the initial post, let's say? So let's say they went to the initial post, and there's a reply.

They tab through, they X out, and then they come back to the initial post. But what if they're on the second post? The same expected behavior: they open the tray, read the tray, and then go back to the second post. Let me edit this real quick and make it bigger if I can. I can't. So it doesn't matter where you are in the split view.

The tray doesn't show at all times. It'll only open when you're opening any responses, or the initial post of any student. And when it opens, it shows the initial post at the top, so the context is still there. Then a user can go down, go through all the replies, and when they're done, they exit the tray and go back to where they left off. Right.

But that's assuming they're on the initial post. Let's say they're three down. It'll take them to the third one down, where they were. But will it start all the way at the top again? In the split view, the user's not going to see any responses inline. So inline, when you have the initial post and you, like, click view replies, it expands down and shows everything there.

In split view, when a user does that, it opens the tray instead of expanding. And then the sub-replies, like, how would the reader read those? It'll all be individual items. Okay, so it will go main, then secondary, then, let's say, third-level post, fourth-level post.

Yeah. That is one change between legacy and the redesign: it only goes to, like, three nested levels. And part of that was an accessibility issue; there was only so small we could make things before it's just unreadable. Yes.

Yeah. It'll continue to go down, and they'll see that it's nested. And it's really not until they're exiting the tray that they're brought back to the main discussion view, on the post they left off on.
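The focus behavior being described here is essentially a focus trap with restore-on-close. A minimal sketch in plain DOM terms, with all selectors and names illustrative rather than Canvas's actual code:

```ts
// Illustrative focus-trap sketch; not the Canvas implementation.
let lastFocused: HTMLElement | null = null;

function openTray(tray: HTMLElement): void {
  lastFocused = document.activeElement as HTMLElement; // remember the post
  tray.hidden = false;
  tray.querySelector<HTMLElement>("button, [href], [tabindex]")?.focus();
  // Keep Tab / Shift+Tab cycling inside the tray until it is closed.
  tray.addEventListener("keydown", trapFocus);
}

function closeTray(tray: HTMLElement): void {
  tray.removeEventListener("keydown", trapFocus);
  tray.hidden = true;
  lastFocused?.focus(); // return the user to the post they left off on
}

function trapFocus(event: KeyboardEvent): void {
  if (event.key !== "Tab") return;
  const tray = event.currentTarget as HTMLElement;
  const focusables = tray.querySelectorAll<HTMLElement>(
    "button, [href], input, [tabindex]:not([tabindex='-1'])"
  );
  const first = focusables[0];
  const last = focusables[focusables.length - 1];
  if (event.shiftKey && document.activeElement === first) {
    event.preventDefault();
    last.focus();
  } else if (!event.shiftKey && document.activeElement === last) {
    event.preventDefault();
    first.focus();
  }
}
```

The key design point is the saved reference to the previously focused element; that is what lets a screen reader user land back on the exact post they left.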

Good question. Yeah. Anyone else? Do @ mentions allow students to mention across cross-listed sections, or does it restrict them to the people in their section? I believe so. It's all based on the search list. There was a bug in there that we fixed that was limiting it to, like, the inbox list, which is not quite right.

So it should. And if it doesn't, please let your CSM know, and that's something we can work on. I saw another hand over here. Oh, and I'm sure it's exactly the same, so: we've had a lot of faculty ask to disable the capability for students to message each other individually. Was there any added functionality with messaging in this release? Not in this one.

That is kind of a global thing I'm looking into a lot, though: adding those moderation settings, those restrictions, for when instructors need them. Okay, thanks. Does the version history also include deleted posts? I'm sorry, deleted replies. I don't remember.

I'm so sorry; I can find out before I leave today for you. But deleted posts do always show. I don't remember if they show edit history or not, because they're deleted. I don't think they do,

due to the nature of how we're treating deleted posts. But again, that is something I've looked at, because every time an admin or instructor needs to see something that's been deleted, they have to contact us, and that's just something we could improve: giving you access to the information that you should already have access to. Any other questions? I'll be around later if you don't have one right now. Let me go back up here.

Okay. Good. Okay. So this is where Zack Pendolyn and I are going to split a little time. I'm going to talk about the thirty-thousand-foot level and what we're seeing across the industry.

And Zack is probably the smartest person I know in the field of AI; I feel a bit lost every time I meet with him. He's going to dive into what we're actually doing within our products themselves. I'm the one with slightly more hair.

AI is one of those tools, or one of those terms, that gets thrown around a lot. AI has been around for decades: it is computer science paired with data sets to solve problems. That's the definition of AI. Pretty simple.

What we talk about now, the more common term recently, is generative AI. It kicked off on my birthday, November 30, 2022. This is when ChatGPT launched, and it went to a million users in, like, five days. It was the pairing of a large language model, which is a data set

trained on literally billions, now trillions, of data points, combined with an easy-to-use interface. Right? It suddenly gave such good feedback that people were just, you know, shocked right off the bat. And if you use it at all, you'll see it's really good at some things and not so great at others, but it's used to generate text, code, images, video.

Remarkable. It's been, like, revolutionary. And over the last year, I really do think we've gone through the five stages of grief. Right? The initial denial: I think there was a lot of "it'll go away."

You know, like Homer Simpson. Yeah, Homer Simpson said, "The Internet? Is that thing still around?" I think some people looked at AI the same way. Right? We want to move beyond that. It was right as we were coming out of COVID; we were going back to normal.

We were about to hit normal again, and then we got hit by this. Right? The only constant is change; we have to flow with it. Right? Anger: I think we saw that initial ban it,

ban it, block it, keep it off our campus. We all know where that goes: students find a way to use it regardless. And we see most of those campus-wide bans repealed already.

Bargaining: really going to, you know, what can we do to make this work? And literally a lot of those conversations were, how do I let my teachers use it without my students using it? Which is a strange approach, but we saw a lot of that. Depression: I think we all felt some level of that at some point throughout this process. And now, acceptance. I think we really are there.

This year, I really do think, is the year of acceptance. Right? We've gone through the year of turmoil; now we're on this path of, how do we actually make this effective? How do we put these tools to work? And we've seen, I don't think I've seen as many startups, as many new logos, since the dot-com bubble. It's incredible how many companies are out there with a hammer looking for nails.

Right? So there's a lot of this out in the field. The biggest barrier is fear. Right? We've been conditioned to fear AI. 1983: the computer in WarGames was going to blow up the world because someone wanted to play chess with it.

Right? 1984, The Terminator. There's actually a Campaign to Stop Killer Robots; really, it's anti-AI. I had a turning point last year. I was doing a webinar with Sheryl Barn from MIT.

And I said something about hallucinations, and she said, stop calling it that. I was like, what do we call it? She's like, no, you're anthropomorphizing something that is not human. And so it kind of put me down this path of: she's right.

We talk about this in a way where, you know, even some of our partners have tools that they call by female names or by human names. Right? We talk about this in a way that gives human characteristics to what is not in any way human; it's a tool. Right? These are simply tools, like autocomplete, trying to give us the information we want, trying to answer us. And even those hallucinations that we see are its attempt, based on the data that it has, to give us what we want, to answer the question we're giving it.

Right? And look at it: it's really great at summarizing. Ask it for a summarization of all the battles of the Hundred Years' War. Right? It's great at combining all that information. Distill that down to a paragraph? It can do that.

Right? It's great at those kinds of things. If you play with the tools, you'll see... I'm still not convinced that graphics are that good yet. I have another slide deck where I try to get it to give me the images I want. The amount of time spent giving it the prompts to really distill down what it is you want is incredible, and I think we underestimate that a lot.

But there's really interesting data that Tyton Partners came out with at the end of last year, and it shows student adoption much further down that traditional adoption curve than educators. And I would actually say, since this is three or four months old, students are even further down that path now, and I don't know that educators have moved much.

I think one of the challenges is we're so accustomed to that fear, we're so worried about it taking jobs, that we're not engaging with the tools themselves. We're still in that denial mode, I think, across the board. There are exceptions; I'm not disputing that.

But the biggest challenge, and actually if you look at the quote: users are much more likely to have a positive perception of AI if they've actually used the tools. If you've been out and used DALL-E to try to generate an image, you understand it's not quite there yet. Right? It's good; if you say give me a Monet painting, it'll do a very nice facsimile, but it doesn't have its own style. It's not creating its own artwork yet.

And so those things will progress down the road, but realistically, these are tools. This is nothing different than, you know, a calculator, to the nth degree. Right? And when you look at it as a tool, and we understand it has to have somebody holding that tool, you're much less likely to fear it or to not use it. So we give a lot of encouragement to get out there and try these tools.

Have your educators out there using them. I think one of the issues with the adoption that we see, too: I have a nineteen-year-old at the University of Utah, and I have a son who's in junior high, a thirteen-year-old. And it's remarkable how different their perceptions of AI are. My daughter is scared to death to use it unless her professor has been very clear

about how to use it. And she has had two professors in her entire first year, two professors, that have actually been very clear about how they should use AI. The rest don't acknowledge it or don't talk about it, and so she's just scared to use it.

Right? The others, hers are mostly English and writing, have been very clear and said, here's how you can use it. Build it into the process. Use it to get words onto paper as a starting point. Use it to summarize vast sets of data. Use it in these ways, and that's okay.

My junior high school student said none of his teachers have talked about it; they haven't even told them when they can't use it. And he truly does not understand the difference between using Grammarly and using ChatGPT. They're both tools. They're available to him. Why couldn't he use them? They're out there. They're available.

So that guidance, toward both educators and students: we've got to get a lot smarter about how we're telling people to use these. We've got to get over the fear, embrace the process, and actually get them using this. And so there's a lot we're doing as a company. Again, there are things to fear.

And I'm going to be honest about the fear part, because there are some realities around this. Loss of student data: protecting it is always going to be underscoring everything we do. Loss of educator or institution IP: this is a really important piece.

There's a story about a Samsung programmer in South Korea who took a piece of proprietary code and dropped it into the public-facing ChatGPT to check it for errors. Well, that code has now become part of the baseline; it's been absorbed into that large language model.

It is not protectable; that code is now public domain. There are really interesting aspects of creating images using AI right now. There was a court case last year that said if I'm an artist and I use ChatGPT, or DALL-E or one of those tools, to create art, it is not protectable. It's public domain, because it was created by a large language model.

It absorbed all that information out there; it can't be protected. But like I said, the more you understand how difficult it is to create the videos and images you want... At some point, I think there's going to be a case like this: if I spend a hundred hours using Adobe Illustrator to create an image, that's protectable, it's mine. But if I spend a hundred hours writing prompts to get AI to give me the same image, the image I want, it's not protectable.

And I think there's going to be a court battle around that; there are a lot of legal issues that remain out there. So erring on the side of protecting all of that IP, all that information, is incredibly important. Here's what scares me: deepfake video and imagery, and I'll add voice to that, because we just saw our first instance of it during the New Hampshire primaries: robocalls with an AI-generated voice pretending to be Joe Biden, telling people not to go vote. And there's already a bill banning the use of AI-generated content of his voice

in robocalls hitting Washington right now, because it's so realistic, it's so hard to detect, and people don't truly understand. If you're not up to date on what's possible with AI, you might think, oh yeah, that was really him; he called me.

Right? So we're already seeing laws changing rapidly around that. Fake news and bias: again, these are trained on millions or trillions of data points and they're very good, but they can still be subject to bias. If you go out and say, give me a picture of a CEO

of a Fortune 500 company, it'll send you a picture of a white male. That's the bias that's fed into it. All those data points still show bias, because the underlying data points are biased.

So there's a lot of work being done on how we prevent that bias from creeping in, and how we keep people from being able to manipulate the results long term. Productive student usage: this is one area where I liked what Sam was talking about. And I'll plug my friends at K16, because they work with GPTZero, and their approach is really interesting, because instead of being punitive, like Turnitin tends to be: aha, caught you cheating. Yeah.

Right? We don't live in that world anymore. We've got to think beyond that, because you can't, with reliability, point at something and say this was created by ChatGPT, created by AI. There's always a margin of error, always. They can claim ninety-nine percent; it's not there.

What they can say is... and I've actually tricked the public-facing checkers. I've written badly enough on purpose to put something into an actual checker and have it flag it as AI-generated. It wasn't; I wrote it. I wrote it in the style of AI, which is not that hard.

So you can always trick these systems. What the tool GPTZero and K16 have developed, and it has a plugin for Canvas, is that it listens across the entire course, and it says: I'm going to flag you once in an assignment as possibly using it. We're going to flag you in three assignments, two discussion posts. And then it's going to say, you know what? That may not be an error;

you might need a refresher on when to use AI; we may need to have a good discussion on proper usage of AI. It's not punitive. It's not attempting to catch that student and throw them out; it's attempting to correct their behavior and get them to use the tools correctly. That's one thing I love.

I was apprehensive, and then I sat in on their presentation at EDUCAUSE last year and was blown away. That's a much more productive approach, and it's the way we're going to have to do it in the future, because you simply can't point to a source document and say, see, you cut and pasted. That doesn't exist anymore. The original source document doesn't exist when you're using AI-generated content.
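The course-level, non-punitive pattern being praised here can be sketched roughly as follows. To be clear, this is a hypothetical illustration of the idea, not GPTZero's or K16's actual API, and the threshold and confidence values are made up:

```ts
// Hypothetical illustration; NOT GPTZero's or K16's actual API.
interface AiFlag {
  assignmentId: string;
  confidence: number; // detector score; always has a margin of error
}

// One flag may be noise; a pattern across several assignments is a signal
// to start a conversation about proper AI use, not to punish.
function suggestIntervention(flags: AiFlag[], threshold = 3): string | null {
  const strong = flags.filter((f) => f.confidence > 0.8);
  const flaggedAssignments = new Set(strong.map((f) => f.assignmentId));
  if (flaggedAssignments.size >= threshold) {
    return "Recommend a refresher conversation on appropriate AI use.";
  }
  return null; // a single flag could easily be a false positive
}
```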

The digital divide is something that we talk about a lot. Even, you know, ChatGPT: GPT-3 is based off of 175 billion parameters, and GPT-4 is on the order of a trillion parameters.

One's free, one's not. Right? You want to use the trillions? You want to use the one that knows what happened in the last few years? You've got to pay for it. In education specifically, one of the things we want to avoid is creating an accessibility gap. You don't want to have those who can afford these tools and those who can't. So even looking at the very short term, how do we prevent that? How do we make sure these are scalable? And one of the big things is cost at scale.

There are vendors in our space who have rolled out AI tools, and we know they're based off of the public-facing AI models. The cost for those exceeds the cost of their LMS license. So in the short term, they can give those away and get people excited about it. Long term, they're going to be charging their customers for it, or they're going to go out of business, because that's how that works.

So, the cost at scale of using these tools: the free version of these tools is amazing if you haven't played with them at all. But when you actually start getting thousands, tens of thousands of users, that cost goes up aggressively. Zach will talk a little bit more about that, but that's one of the things that keeps us up at night: how do we actually make sure we can deliver this to everyone? At InstructureCon last year, we rolled out our guiding principles,

what we want to stay focused on. Intentional: solving actual human problems, actual educator and student problems. Right? Like I said, there are a lot of vendors out there. They've got a hammer,

and it's looking for a nail. One week, they're a student advisor. The next week, they're a smart analytics tool. The next week, they've discovered something else. And there are a lot of use cases we haven't actually touched on yet,

haven't discovered yet, but we will find them. But we want to make sure that we're not just a hammer looking for a nail: we know the nails, and we design the hammer to fix them properly. Safe:

everything we do underscores protection of IP, protection of student data. Everything we do. And then equitable access: how do we make sure these tools scale for everyone? One of the amazing things about Canvas is that teams at Harvard are using the same LMS my student at, you know, Evergreen Junior High School is using.

That's the power of these tools, and we need to make sure that we maintain that equitable access. Here's what we've done behind the scenes that I want to make sure everybody's aware of. We've established our own guiding principles; there's much more about that on our website. We've got an AI customer council; they give us feedback both on our tools and on our usage of AI in the product.

We have an AI officer who is in charge of basically making sure, and she happens to be our data and privacy officer as well, that all of those elements come together in a very proactive way. We joined the EdSafe Alliance and its global pledge, which is really education across the globe saying we're going to use AI responsibly. We're going to make sure we use these in ways that benefit students and don't, you know, risk their data.

And then we've actually put all of this on our website, and we update it constantly. We work with a third party, SIIA, on actually advising the federal government on policy around AI. So we put together guiding principles, seven guiding principles, with a number of others, with Google, with Microsoft; we sat in these meetings and actually came up with these and said, look,

before the government does a knee-jerk reaction and, you know, starts trying to ban AI, let's talk about how you get good principles for these in education. And then, like, the executive order on safe, secure, and trustworthy AI: that is a direct outcome of that work that we did with SIIA. So this was released,

and it's basically the first round of what could, in the future, be legislation to govern AI. It's all recommendations; this is the first step. And so as we follow that, we want to make sure both here and globally we're affecting that governance. And if you haven't already read it, there's an AI guidance policy from the Office of Educational Technology.

It's actually really, really good. It aligns very well with the priorities that we'd outlined. Right? One of the pieces that they focus on is education-specific guidelines and guardrails. The one thing I wish they would have touched more on is training of educators. That piece, I think, is one of the biggest aspects.

We need to make sure our educators understand these tools and are using these tools, so that they are passing that on to students as well. So that's my thirty-thousand-foot level. One of the things that's really amazing about AI is that I've had very similar conversations about this in Manila, in Sydney, and in Mexico: we're all facing the exact same challenges, and we're all facing them together. And so as we collaborate, as we have these discussions, we learn from each other the right ways to use AI, the wrong ways to use AI, innovative ways to solve student problems and educator problems.

And that's one of the things Zach spends a lot of time doing as well: actually having these conversations and exploring these new use cases. So I'm going to hand it off to Zach, and he's going to show you a little bit more. Alright. Hello, everybody.

Thank you so much for having us. As Ryan mentioned, I'm the chief architect here at Instructure. Everything he said, I think, was true except for me knowing what I'm talking about. So I would love for this, as much as possible, to be a conversation. As I talk, if you've got questions, if you've got comments, please just flag me down and let's have the discussion, though I think we'll probably have some time at the end for questions as well. I'm curious:

by show of hands, how many folks have acceptable-use policies at their institution for large language models or AI tools? I see a few, and some sort-of-okays; getting there. Good. How many of you have AI initiatives or projects that are going on right now?

Okay, we've got more hands there. What are some of the things that you're doing with AI? Any volunteers? Yes: we created a PD course for faculty on conversational AI. And we have started an AI commission where we're going to be looking at different aspects, from teaching and learning applications to student affairs applications, and kind of getting into some of the use cases and the ethics behind it.

That's great. Have you seen a shift in educator perception or faculty feedback? Those stages that were described were very accurate. I think in some cases it's just across the spectrum. Right? Like, on one side, we have humanities faculty that are like, well, we're going back to blue books,

you know, doing essays in class in your blue book. And then on the other side, we've got faculty that are all in. They're exploring the role of personas, right, developing persona prompts that act as, like, thought partners for students. And that's kind of the wide gap that we have right now. We're trying to figure out how we do that in a way that students don't lose the critical thinking and their own voice in the process. It's assistive versus generative, right? Like, using it to enhance what you're already capable of,

kind of like going to a tutor because you got stuck on a thesis statement, versus just going and getting a thesis statement written for you. You know, because there are no guardrails in place for that, other than whatever detection tools we have, and as you know, those are not perfect. Just to demonstrate, I've taken an entire AI-generated essay and shown how it comes back as human, just because of a clever prompt. So it's the wild frontier right now.

That's right. Yep. Any other places? Yes: we do some research already, we've been doing research ongoing, but what we're trying to do is scale AI learning out to our general student population. We're also looking at policy, both for students and for how our university adopts technologies.

So we're trying to incorporate evaluating AI into our technology selections. Oh, that's great. I think, yeah, as Ryan talked about, and as was said here: this is a moment where it's, I guess, a tool that I think of like a calculator, right, where when I'm doing calculus, I'm not worried about addition and subtraction and division, because I've got the calculator to do the easy parts for me, and it allows me to focus on the harder things. And this is really almost a calculator for language.

The challenge, I think, is that we've got to figure out what the appropriate way to use these tools is for students, because these are tools that I think are going to separate employable, successful people in the future from people who are going to struggle. Right? And when we talk about jobs that AI is going to take: I could be wrong here; I've certainly been wrong almost every single time I've tried to predict the future. But I think it's going to be rare that we see a situation where AI swoops in and just decimates an entire industry. Right? I think what's probably going to be far more common is that every industry says, my really great employees use AI and they're ten percent more effective,

and that means I need ten percent fewer people. Right? And that's a challenge I think we've got to solve as a society. But I think it's something we've got to think about as we think about how we use these tools to empower educators, and how we also use them to improve student outcomes. So I'm going to talk today a little bit about what we're doing specifically at Instructure with these tools.

And again, as we run through our process here and as we run through individual features, please, you know, raise your hand if you've got questions. So, most of you, I think, are familiar with Canvas's release policy. We follow that still; we've got releases coming out every three weeks. What's going to change with AI-powered features is these first two buckets here. Alright.

So, as part of my job as chief architect, I manage our research. That team typically works in six-week iterations: take a research project, build a proof of concept, and then write an internal white paper or research paper. Now, for research that is really promising, or that we think is valuable, we're going to go ahead and push that work out into what we call a limited beta. That means putting things out into the product that are going to be available to a small number of customers.

So if this is something you're interested in, please reach out. I'll include my contact information afterwards. But this is really an opportunity for us to collect really deep feedback and to meet regularly with you. You'd be talking with researchers, to make sure this is a feature that is even sustainable.

Ryan talked a lot about equity and about cost challenges. It's something that keeps me up at night. The future of these tools is not really exciting to me if they get locked behind insurmountable cost increases. So we're trying to keep as much of it free as we can. And if we get through that process, then we hand it over to our core Canvas development pipeline, where we scale the feature up and make it generally available to everyone.

As mentioned, all the features we talk about today are going to be in the research and limited beta buckets. I'll do my best to distinguish between the two and to talk about timelines as we know them. But know that, again, a lot of this is fluid, because we have a lot of great ideas that we develop and then realize, geez, too expensive. Okay. So our AI strategy is tiered here.

The first tier here is about context and data: making sure that we have a foundation we can use to build responsible AI features on top of. That means we're taking content inside of Canvas and encoding it in a way that is machine readable and captures its semantics, so that content is available to be accessed by large language models during prompting, to get better outcomes. Now, we do that for a few reasons.

First, we do it to reduce hallucinations. Right? We really don't want AI models saying whatever they wanna say. We want them saying what the instructor in the classroom wants them to say. Because what we've heard repeatedly in our discussions with faculty and with students is that nobody wants a third, weird robot voice in the classroom. We really wanna use these tools to amplify and extend educator voice, and to make it possible for educators to connect with students in ways that weren't possible before due to scale challenges.
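To make that concrete, here is a minimal sketch of retrieval-grounded prompting, the general technique being described. The `retrieve` helper and the prompt wording are assumptions for illustration, not Instructure's actual implementation.

```python
# Hypothetical sketch of retrieval-grounded prompting; the retrieve()
# helper and prompt wording are illustrative, not the shipped design.
from typing import Callable

def build_grounded_prompt(
    question: str,
    retrieve: Callable[[str, int], list[str]],  # semantic index lookup
    top_k: int = 4,
) -> str:
    """Assemble a prompt that restricts the model to instructor content."""
    passages = retrieve(question, top_k)
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using ONLY the course material below. If the material "
        "does not contain the answer, say so instead of guessing.\n\n"
        f"Course material:\n{context}\n\nStudent question: {question}"
    )
```

The point of the pattern is the constraint: the model is handed the instructor's own material and told not to go beyond it.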

Now, you'll see this. We'll talk a little bit about search, and I know Sharon mentioned that search is actually built on this effort. And it's critical to everything that comes afterward, because it allows us to inform these models in safe, responsible ways. The other piece of that is making sure that data we send to these models doesn't go anywhere we don't want it to go, that it does not get used to train models or become part of some future data set. And so we're working really closely with Amazon to deploy safe, regional large language models inside of our own cloud, and then we're going to be using those as much as we can.
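As a rough illustration of what "regional models inside our own cloud" can look like in practice, here is a hedged sketch using Amazon Bedrock's runtime API. The region, model ID, and request shape are assumptions; the talk doesn't specify the actual deployment.

```python
# Sketch only: pins inference to a chosen AWS region so prompts and
# outputs stay there; the model ID and body schema are illustrative.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="eu-central-1")

def invoke_regional_model(prompt: str) -> str:
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example ID
        contentType="application/json",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 512,
            "messages": [{"role": "user", "content": prompt}],
        }),
    )
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]
```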

That helps us with cost, and it helps us with security. And it makes sure that all of the guarantees you have with other features around protection of PII, data residency, regionalization, and so forth, we can maintain for you as we keep moving with AI. Now, on top of that effort, we look at workflow enhancement. We're focused a lot on educator efficiency: taking things in teacher workflows that today cost a lot of time and a lot of mental load, and trying to offload those where we can, so that educators can spend more time doing the things they love.

Right? Now, the way I describe these: I'm sure none of you have this problem, because everything you do in your jobs is entirely fulfilling and satisfying. But I occasionally will be asked to do something at work that takes a lot of cognitive load, so much cognitive load that by the time I'm done with it, I just don't have the energy to do the things I want to do anymore. We know that some of those things exist for educators. Those are the things we're really trying to tackle here, and we're trying to do it in ways that, as Ryan mentioned, keep a human in the loop. We wanna make sure that educators remain in control. And I think that's not just telling an educator,

"Here's output from an AI model, say yes or no." Because if I've got a big green button that says yes and a tiny little button in the corner that says no, I'm usually gonna click the green button. So it's making sure that our workflows encourage educators to be active participants in the process, and also making sure that these things exist naturally in current workflows. My dream with a lot of the features we talk about today is that educators, that you all, get excited about the feature without even knowing that it's AI.

Right? I just want the best LMS in the world, the most extensible and portable LMS in the world. And I really don't care to trumpet the technology behind it, whether it's AI or not. And then on top of that, we're making sure that as we build features around large language models, we're improving our API surface and our LTI launch points, to make sure that we're not the only ones who can build stuff with AI inside of Canvas.

We wanna make it easier for the other partners that you work with, and for our partners, to go ahead and build features that take advantage of this infrastructure, so that Canvas continues to be the hub of teaching and learning, but you're not dependent entirely on us for innovation. You're able to innovate, and you're able to work with other partners to innovate as well. Okay. Any questions about this before we talk specific features? Comments? Yes.

I just wanted to share an interesting thing that happened recently. There's a professor at BYU who teaches coding to undergrads, and he is on the end of the spectrum where he embraces the use of GPT and other tools for the students to write their code. And then he said, but then I use GPT to grade their work. So he's like, they're pretending to write the code, and I'm pretending to grade it.

Oh, that's amazing. It's a very, very strange place we find ourselves in. Yeah. The best thing I heard the other day was, we need to find ways for AI to enhance learning, not avoid learning.

And that model, if you apply it to a student: is this helping you learn, or are you avoiding learning? I think for students, even understanding that kind of paradigm is helpful. I think that highlights something else, so thank you. Because what we've found is that as we build these features, there's a really wide range of opinions from institutions and from individual educators. I've had literally two different educators, one of whom told me, you know, the most important thing you could build for me would be a grading assistant.

Right? Build something that helps me grade papers. I had another instructor tell me, for the love of all things good and beautiful, do not build a grading assistant. It is solely my job to grade these things. And so as we build these features, we're going to be as transparent as we can be about what model we use, what data goes into it, what data comes out, and provide as much control as we can to institutions and individual faculty.

So we're going to give you the information to ultimately make the decision, because the decision belongs with you, right? You know your campus better than we do. We wanna give you the tools to be successful, but we're not gonna force any of this stuff on you. Okay? Great. So let's talk features. The first, as Sharon mentioned, is semantic search.

So the thought here is that we can add search to Canvas, which is exciting, but we're able to do it using some of the fallout from large language models, meaning we're able to take natural language and encode it in a way that's machine understandable. The example I give is this: if I tell this room, "the cat would not fit in the suitcase because it was too big," we all know that "it" is the cat. Right? Now if I say, "the cat would not fit in the suitcase because it was too small," suddenly "it" is the suitcase.

Now, that's easy enough for us to understand, but in my lifetime, that's been incredibly difficult for computers to understand when we talk about language. And the real key to the success of large language models here is what is called attention: their ability to pay attention to everything around the current word they're looking at, to understand relationships and context. And when we apply that to search, it means we're able to go into Canvas and ask questions like, "what did my instructor say about the mandolin?" and get back search results about guitars, because the AI in Canvas now understands that a mandolin and a guitar are both stringed instruments. And so if there's nothing in the result set about a mandolin, but there is something about a guitar,

that's probably what the student meant. Okay? So this feature is in limited beta right now. It presently works on course pages, and as we scale up, the beta will continue. We'll add assignments, discussions, and so forth.
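For readers who want the mechanics, here is a toy sketch of how embedding-based semantic search ranks results, which is how a query about a mandolin can surface a guitar passage. The `embed` function stands in for any sentence-embedding model and is hypothetical; nothing here is Instructure's code.

```python
# Toy illustration of embedding-based search; embed() is a placeholder
# for a real sentence-embedding model.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query: str, docs: list[str], embed) -> list[tuple[float, str]]:
    """Rank documents by semantic similarity to the query."""
    q = embed(query)
    return sorted(((cosine(q, embed(d)), d) for d in docs), reverse=True)

# A "mandolin" query can rank a guitar passage highest, because the
# embedding space places stringed instruments near each other.
```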

And again, this is really key, because once we've got all of the information in Canvas indexed for search, it's available to pass into the other features that we build, again, to amplify educator voice, to reduce hallucination, and to make sure that when we use AI, it's responding and engaging the way the instructor wants it to and doesn't have its own agenda. Yes? Does that mean a global find and replace is coming for teachers? Oh, no. Good question. Unfortunately, I guess the good news with semantic search is that it's natural language, right? We don't need to use Boolean queries.

We don't need to use other things. The trade-off is that it's not really a piece of infrastructure we could build find and replace on, because it's using semantics and not actual characters. Sorry. Okay.

Good question, though. Yes. Okay. Any other questions or comments about this one? Yes. When's it gonna be available? You said it's available in beta now? It's in limited beta right now, so it's something we can turn on institution by institution

as we collect feedback and figure out costing. So we don't have a final date for when it will be generally available. But this one's really promising, so I would expect we'll have more information for you by InstructureCon. And this is one of the features that's not extra cost?

That's right. That's right. Yeah. This one is foundational to what we do, and right now all indications are that, yeah, there's no extra cost associated with it. And will the search be just inside of a class? Like, right now, I'm paying for Atomic Search. Yep.

And we're trying to eliminate these extra-cost tools that don't really work that well. Yeah. Yeah. So right now, the limited beta is just inside of a course. We know there's a need for account-level search, and so as we scale up, the plan is to add the account level.

Yes. To get access to it, do we just contact... Good note. I would contact your CSM, yep. And they'll probably forward the request to me. You can also email me, and I'll put my email up again.

Alright. Yeah. Thank you. Before we look at the next feature here, I will say, I could talk all day about this one.

I get so unreasonably excited about it that when I was visiting a university two weeks ago, I pulled this slide up and said, okay, everybody, I'm gonna be as calm as I can be. I get really excited about search. And Chris Ball shook his head and said, yeah, he gets too excited about search.

It's because you understand how frustrating search can be, and how dumb and two-dimensional traditional search engines are. That's why you get so excited about it. Because it's smart. It actually gets the right answers. It's pretty amazing. And I do think, again, it powers a lot of what's coming here.

So the next thing we're working on, which is also in limited beta, is conversational analytics. The idea here is that educators can build custom reports without needing to write SQL or use any type of report builder; they can just ask a question and get back a report that they can save and rerun. Sharon talked about this one as well. I think this feature alone is pretty exciting.
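A plausible shape for "ask a question, get a report" is natural-language-to-SQL over a known schema. The sketch below is illustrative only; the schema, the prompt, and the `complete` helper are assumptions, not the shipped design.

```python
# Illustrative only: turning a natural-language question into read-only
# SQL against a known schema; complete() stands in for any LLM call.
SCHEMA = """
courses(id, name, term_id)
submissions(id, course_id, user_id, score, submitted_at)
"""

def question_to_sql(question: str, complete) -> str:
    prompt = (
        "You write read-only SQL for the schema below. "
        "Return only the SQL statement.\n"
        f"Schema:\n{SCHEMA}\nQuestion: {question}"
    )
    sql = complete(prompt)
    # Guardrail: refuse anything that is not a plain SELECT.
    assert sql.lstrip().lower().startswith("select"), "non-SELECT output"
    return sql

# e.g. question_to_sql("average score per course this term", llm) might
# yield: SELECT course_id, AVG(score) FROM submissions GROUP BY course_id
```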

I think what I'm really excited about is the class of features this represents. Previously, if I wanted to write a report, what did I have to do? I had to go learn enough SQL to be dangerous. I had to do a lot of trial and error. I may have had to learn a particular report-building tool. Now I can just ask a question and offload all of that domain expertise onto a large language model. So what we'll see over time in AI, I think, is this democratization of expertise in a lot of domains that were previously very difficult for people to get started in. Another application of this type of approach is something we announced at InstructureCon, if you were there, which was natural language page design.

Right? So similarly, being able to describe a page and have the large language model write the HTML for you. Now, when we talk about the transient nature of all these features, I'll tell you candidly: we started down the road on that one and found out it was going to be incredibly expensive. We would have had to charge more for it.

So we're on pause. We're gonna revisit it when we work on the block editor. So hopefully that works out. And again, I promise you that I'll be as transparent as I can with all of these, but know that with all of them, there's a possibility that either they don't work or they get too expensive. Okay. Any questions about conversational analytics? Comments? The next one is... oh, we do have a question.

Yes. Yeah. We use a lot of SQL to query Canvas Data, with Canvas Data 1 and Canvas Data 2. With this conversational analytics, are you looking at that? That's right. Yep.

So conversational analytics is powered by Canvas Data 2. All those tables that are available there, this will query. Okay. The next one is content translation. So large language models, it appears, are very good at translation relative to tools that came before.

Right. Today, the Canvas interface is localized, and Microsoft's Immersive Reader provides the ability to translate page content in real time. We're looking at taking these models and using them to extend that translation capability to human-written content in Canvas, through assignments, discussions, quizzes, and so forth, in a safe way. So if you've got students who use English as a second language, or students who prefer to learn in a different language, this becomes possible with this feature.
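A minimal sketch of what LLM-backed content translation could look like, assuming a generic `complete` model call; the real feature's interface isn't described in the talk.

```python
# Sketch of LLM-backed content translation; complete() is a placeholder
# for whatever model endpoint is actually used.
def translate_html(content_html: str, target_language: str, complete) -> str:
    """Translate course content while preserving its HTML structure."""
    prompt = (
        f"Translate the following course content into {target_language}. "
        "Preserve all HTML tags, links, and formatting exactly.\n\n"
        + content_html
    )
    return complete(prompt)

# e.g. translate_html("<p>Submit your essay by Friday.</p>", "Spanish", llm)
```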

Yes. Would you expect that to live, to persist, in the LMS? Because I could see that living in a browser just as easily. Sure. Yeah. Yeah.

It could. Today this would be a button in the LMS; that could change. In research, to your point, our engineer also wrote a Chrome extension to do the same thing. I know Chrome provides translation capabilities as well.

Okay. Great. I know Sharon talked about offline access.

This is a place, too, where we're using a model that's small enough that we can put it on a phone. So if this one goes well, we would expect that you'd see this with offline access as well. We can even run it in a browser, but I think you have to download, like, two gigs of model weights, which kind of defeats the purpose of going offline. So how is this gonna be controlled? Obviously, thinking of a Spanish class, say...

Yeah. So, would this tool be turned on by the student or by faculty? And is it at the account level or the course level that the feature is turned on? Yeah. Good question. I don't think we have an answer yet on whether it's done at the account or course level. The expectation is that it would be used by faculty and students.

Both of the institutions I've talked with about this have been in Europe, specifically, and their feeling is they love the idea of students being able to read instructor-authored content in their own language. But then immediately afterwards, the feedback has been: now we want students to be able to author content in their own language and then have it translated for the instructor. So I think we would probably provide... I guess my question is leading towards those courses where it's teaching a language. Sure. Yeah. Yeah.

That's a good point. Yeah. And I do think we need to, again, put tools in the hands of educators. It sounds like we'd probably need some course-specific ability to turn it off. I mean, I taught Spanish.

Let's be honest, students were using Google Translate even before. That's right. I'll say, I was just in Hungary last week, and my Hungarian is abominable, and Google Translate is terrible at Hungarian. Turns out large language models are a little bit better. So I was finally able to get a table and make a reservation in Hungarian.

Audio is something we've experimented with. It's squarely in that research phase for us, probably further out. Our Studio team right now is actually using AI tools to provide automatic captioning of all video content, and when they've got that done, one of our backlog research projects is taking those captions, translating them into audio, and then overlaying it on top. Yeah. Okay.

Next one: discussion summaries. Sam talked about all the really exciting stuff with discussions. This one, we're hoping, just layers nicely on top of all the work that she and Rajeev have done. The idea here is that we're able to take a discussion thread, summarize down key points and key questions, and then provide those so that before you go into the classroom, without reading the entire thread, you can see what you need to follow up on.

If you've used Amazon recently, they've been doing this with product reviews. At the top, they'll have a "customers say" summary. We're looking at something like that. So this one, again, the team is researching, and it's been pretty promising. I expect it will go into limited beta, but it's not there yet.
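The underlying pattern is ordinary LLM summarization over the thread. A hedged sketch, with `complete` as a stand-in for whatever model is actually used and a prompt shape that is an assumption, not the shipped feature:

```python
# Sketch of summarizing a discussion thread into key points and open
# questions; the prompt is an assumption, not the shipped feature.
def summarize_thread(posts: list[dict], complete) -> str:
    """posts: [{'author': ..., 'body': ...}, ...] in thread order."""
    thread = "\n".join(f"{p['author']}: {p['body']}" for p in posts)
    prompt = (
        "Summarize this class discussion for the instructor. List (1) the "
        "key points students made and (2) unanswered questions to follow "
        "up on in class. Be brief and neutral.\n\n" + thread
    )
    return complete(prompt)
```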

I will say, this is one of those workflow enhancements I talked about. I think it's a nice feature. It doesn't advertise AI. It doesn't have sparkles. It doesn't have anything else.

Right? It's just something that makes people's lives much better by doing something that was difficult to do. We're also looking at rich content editor integrations. These are just kind of a grab bag of the standard large language model things you see: summarize content, expand content, change the tone of content. We've got a tool where, as you author content, you push a button and it pops open a list of suggested changes, and you can approve or deny them. So it's like a peer review for educators.
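That approve-or-deny flow might look something like the sketch below; the suggestion format and helper names are invented for illustration.

```python
# Illustrative approve/deny flow for editor suggestions; the JSON
# suggestion format and helpers are hypothetical.
import json

def suggest_edits(draft: str, complete) -> list[dict]:
    prompt = (
        "Review this draft like a peer reviewer. Return a JSON list of "
        'suggestions: [{"original": ..., "replacement": ..., "reason": ...}]'
        "\n\n" + draft
    )
    return json.loads(complete(prompt))

def apply_approved(draft: str, suggestions: list[dict], approved: set[int]) -> str:
    # Only changes the author explicitly accepted are applied.
    for i in approved:
        s = suggestions[i]
        draft = draft.replace(s["original"], s["replacement"], 1)
    return draft
```

The design point from the talk is in `apply_approved`: nothing changes unless the educator actively accepts it.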

This one is in development right now for limited beta. My guess is we'll get this out, see how people use it, then fine-tune our approach and focus on the few tools that people get really excited about. Let's see... alright, here's our next slide.

Here we go. Okay. Next one is outcome alignment. If you're using Canvas outcomes today, the idea here is that, given a piece of content, we can automatically recommend which of your outcomes map to it. We demoed an early version of this at InstructureCon last year.

This is powered using, again, that semantic search technology. I'm actually proud of this one, because we take the course content and we ask the large language model to intentionally lie to us by imagining outcomes for the content. So it just creates some outcomes out of whole cloth, and then we map those fake outcomes against the real outcomes and find which ones are most closely related. We're working with Amazon right now on a version of this for K-12 standards.
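Before going on, here is a rough sketch of that "imagined outcomes" trick: the model invents outcomes for the content, and the real outcomes are ranked by embedding similarity to the invented ones. `complete`, `embed`, and `cosine` are placeholders for real model calls; this is an illustration of the technique as described, not Instructure's code.

```python
# Sketch of "imagined outcomes" matching; complete(), embed(), and
# cosine() are placeholders for real model calls and vector math.
def align_outcomes(content, real_outcomes, complete, embed, cosine):
    # 1. Ask the model to invent plausible outcomes for this content.
    raw = complete(
        "Write three learning outcomes this content could satisfy:\n" + content
    )
    imagined = [line.strip() for line in raw.splitlines() if line.strip()]
    # 2. Score each real outcome by its best similarity to any imagined one.
    scored = []
    for outcome in real_outcomes:
        vec = embed(outcome)
        best = max(cosine(vec, embed(fake)) for fake in imagined)
        scored.append((best, outcome))
    # 3. The highest-scoring real outcomes are suggested to the educator.
    return sorted(scored, reverse=True)
```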

So we're looking at state standards and things like NGSS and Common Core, and once that work is done, we'll be able to layer Canvas outcomes on top of this. I'm really excited to look at credentialing and badging as well. I think this would be a huge time saver for educators. Again, it's one of those things that, for me, is very high cognitive load but very low satisfaction. Tagging things is just not a lot of fun. Okay? Yes? Do you see much opportunity for this in higher ed? I guess, where would credentialing in higher ed work with outcome alignment? Yeah.

So I think where I see this playing is that you've got content in your system today that is not aligned to your credential program or is not tagged with outcomes, but you have an outcome taxonomy that you're using as an institution. You can very rapidly bring everything into that framework. Right? So, for a piece of content, here are suggested outcomes; you approve them and move forward. And then I think doing that in the future gives us a lot of opportunity to go beyond just tagging, and look at actually building out credentialing programs and outcomes.

Right? Because we could, again, imagine what that program ought to be at the institution level and make suggestions. I saw this at InstructureCon. It blew my mind, this particular tool.

I'm so happy to hear it. Okay. Yeah. I'll tell the engineer who worked on it; they'll be very excited. Alright.

Next one, which we also demoed at InstructureCon last year. We're talking with the quizzes team right now about what this looks like in production, but it's quiz authoring assistance. So taking things like correct and wrong answer rationales, which again take a lot of time to author and really just duplicate content that's somewhere else in the course, and then writing those or making suggestions. This is again one where it's really important that we have that semantic mapping underneath everything, because we take that course content and use it to write those correct and wrong answer rationales for instructors, and then give instructors a chance to sign off on them.
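A hedged sketch of that grounding-plus-sign-off flow; `retrieve` and `complete` are hypothetical stand-ins, and nothing here is the quizzes team's actual design.

```python
# Sketch of drafting answer rationales grounded in retrieved course
# content; retrieve() and complete() are hypothetical stand-ins.
def draft_rationales(question: str, answers: list[str], correct: int,
                     retrieve, complete) -> list[str]:
    context = "\n".join(retrieve(question, 3))  # ground in course material
    rationales = []
    for i, answer in enumerate(answers):
        kind = "correct" if i == correct else "incorrect"
        prompt = (
            f"Using only this course material:\n{context}\n\n"
            f"Question: {question}\nAnswer: {answer}\n"
            f"Write one sentence explaining why this answer is {kind}."
        )
        rationales.append(complete(prompt))
    return rationales  # drafts only; the instructor reviews and signs off
```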

Again, no date on this one. But we do have a really compelling proof of concept here, so I don't really think we'll do a limited beta here; we'll probably just move into productionalizing it once the teams are ready to pick it up. Okay. Questions here?

And then I think this is the last one in the deck. I talked about that approach where we want to enable partners to be successful. So we're also working on how we extend LTI to make it easier to build large language model tools alongside Canvas. What we're thinking of here, really, is increasing the number of LTI launch points. You've got launch points where previously they may not have made sense, but now, if I have some type of copilot or chatbot, you're gonna want that on almost every page in Canvas. And then, how do we get that bot the right context from Canvas so that it can seed its own prompts and give you better, more accurate results?
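As one hypothetical example of what that context could contain, a launch might hand a partner tool a small structured payload like the one below. The field names are invented for illustration, not a proposed spec.

```python
# Hypothetical example of page context an LTI launch could hand to a
# partner copilot so it can seed its own prompts; field names invented.
def build_launch_context(course: dict, page: dict) -> dict:
    return {
        "course_id": course["id"],
        "course_name": course["name"],
        "placement": "course_page",          # where the tool was launched
        "page_title": page["title"],
        "page_excerpt": page["body"][:2000], # trimmed content for seeding
    }

context = build_launch_context(
    {"id": 101, "name": "Intro to Music Theory"},
    {"title": "Stringed Instruments", "body": "Guitars and mandolins..."},
)
```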

And again, this one is in research right now. We do have a hack week coming up where we're gonna try to really focus on this. So hopefully, again, by InstructureCon we'll have something concrete here for you. An example of that last one and why it's important: Khanmigo. I'm talking with Khan Academy quite a bit. They have essay authoring and feedback tools for students, and they also have a copilot that provides rewriting of content and explanations to students, so Khanmigo really benefits from that type of LTI extension and context sharing.

And as Sharon mentioned, we've got a number of other tools in our AI marketplace, where we're really trying to find partners that agree with us ideologically about AI, who are really committed to solving actual problems and not just shouting AI from the rooftops. So we'll continue, I think, to find those people, and to make sure that Canvas is the best place for them to build their features. And then, with that, that's all I've got. Please,

let's keep the conversation going. You've got my email here: zach at instructure dot com. If you're interested in any of these limited beta experiences, or in partnering with us on a future research project, please reach out. Again, as I manage our research team, I'm very happy to help facilitate those conversations. And if you have any pictures of dogs in costumes, I love pictures of dogs in costumes.

So, always happy to talk about AI, if you've got particular questions about these, or if you just wanna have a deeper conversation about where you think the market is going or what your institution could do. Any other questions? Yes. Have you looked at using the course content to generate assessment questions itself? Yes, out of the course content. We do have some research there.

So that was, believe it or not, our very first large language model research project; it predated ChatGPT. Back then it was a little dicey; it's okay today. I expect it's something that'll probably pop up in the future. But right now we're focused on those core workflow enhancements that feel a little safer and get AI in front of educators.

Okay. We're good. Excellent. Alright. Thank you.

Take care. I will put one more plug in there: Melissa Loble and I do InstructureCast, the podcast; there's a sticker on someone's table. We've talked about a lot of this on there quite a bit. We talk about AI.

We talk about credentials. We talk about a lot of things. So, with that, let's actually do a ten-minute break right now and be back after.

Got it. After the break, we're gonna talk about P-16.